perm filename EXPT.1[DIS,DBL] blob
sn#205059 filedate 1976-03-12 generic text, type C, neo UTF8
EXPERIMENTS with AM
1) Set the interestingness factor of all concepts to 200 initially.
Result: occasional wanderings, but still bursts of creative driving.
Cardinality was reached in about 3 times as many cycles.
Conclusion: the int. factors of the concepts are useful for deciding
what to do in close situations, or where few good reasons exist,
but even 1 good reason is far more influential -- and rightly so!
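A minimal sketch of this conclusion (hypothetical names and weights; AM's actual formula is not reproduced here): a job's priority is dominated by its reasons, with the concept's interestingness factor contributing only a small tie-breaking nudge. Pinning every concept's interestingness to 200, as in this experiment, then makes that term a constant, so only the reasons decide.

```python
def job_priority(reason_values, concept_interest):
    """Toy priority: reason strength dominates; the concept's
    interestingness factor (0-1000) only nudges close candidates apart.
    Hypothetical weights, for illustration only."""
    if not reason_values:
        return 0
    # Even one good reason carries far more weight than the int. factor.
    base = max(reason_values) + 0.1 * sum(reason_values)
    return base + 0.01 * concept_interest  # small tie-breaking nudge

# With all interestingness factors set to 200 (experiment 1), the
# tie-breaking term is identical everywhere, so reasons alone decide.
strong = job_priority([900], 200)        # one good reason
weak = job_priority([300, 300], 200)     # two mediocre reasons
assert strong > weak
```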
2) Pick a random candidate to do next, but maintain INTHRESH as it is
(so the average job-list length is about 20). Also, leave the
interestingness factors of the concepts as they are normally (0-1000).
Result: on the average, it will take about 20 times as long to get to
a given job. On the other hand, several "good" jobs are sprinkled
around in the queue, so the performance (timewise) is cut only by a
small factor. But the behavior is much less focused, less rational.
Typically, a "good" cand will be chosen whose reasons were all
true 10 cycles ago -- and which are clearly superior to those of
the last 10 cands! This is what is so annoying to human onlookers.
Result: Since AM was frequently working on a low-value task, it was unwilling
to spend much time or space on it. So the mean time allotted per task
fell to about 15 seconds (from the typical 30 secs). Thus, the "losers"
were dealt with quickly, so the detriment to performance was softened.
In fact, many of these "failed" almost instantly (meaningless ones).
Conclusion: Picking (on the average) the 20th-best candidate impedes progress
by a factor of less than 20 (about 7), but it dramatically degrades the
"sensibleness" of AM's behavior, the continuity of its actions.
Humans place a big value on absolute sensibleness, and believe that
doing something silly 50% of the time is MUCH worse than being half as
productive while always doing the next most logical task.
Conclusion: having 20 multi-processors simultaneously execute the top 20
jobs will result in a gain of about 7 in the rate of "big" discoveries.
That is, neither a full factor of 20 nor no gain at all.
3) Pick a random candidate to do next, and adjust INTHRESH so that no
candidate is ever excluded from the job-list, and set all ints. to 200.
Result: Many "explosive" tasks were chosen, and the number of new concepts
increased rapidly. As expected, most of these were real "losers".
There seemed to be no rationality to AM's sequence of actions, and it was quite
boring to watch it floundering so. The typical length of the agenda was
about 500, and AM's performance was "slowed" by at least a couple orders
of magnitude. A more subjective measure of its "intelligence" would say
that it totally collapsed under this random scheme.
Conclusion: Having 500 processors simultaneously execute all the jobs on
the agenda would increase AM's performance only by a factor of 10 or so.
The truly "intelligent" behavior is AM's plausible sequencing of tasks.
4) Modify the global formula assigning a priority value to each job. Let it still
be a function of the reasons for the job, but trivialize it:
let the priority of a job be defined as simply the number of reasons it has
(normalized by multiplying by 100, and cut off at 1000).
This raises the new question of what to do if several jobs all have the
same priority. I suppose the answer is to execute them in stack-order
(most recent first), since this is what AM will do anyway.
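The trivialized formula and its stack-order tie-breaking can be sketched directly (hypothetical names; a toy model of experiment 4, not AM's code):

```python
def trivial_priority(reasons):
    """Experiment 4's trivialized formula: priority is just the
    number of reasons, scaled by 100 and capped at 1000."""
    return min(100 * len(reasons), 1000)

def next_job(agenda):
    """Agenda is a stack: newer jobs are appended at the end.
    Among jobs tied at the best priority, take the most recently
    added one (stack-order), as the notes propose."""
    best = max(trivial_priority(reasons) for _, reasons in agenda)
    for job, reasons in reversed(agenda):  # scan most recent first
        if trivial_priority(reasons) == best:
            return job

agenda = [("old", ["a", "b"]), ("mid", ["c"]), ("new", ["d", "e"])]
assert trivial_priority(["a"] * 12) == 1000  # capped at 1000
assert next_job(agenda) == "new"             # tie broken stack-wise
```

Note that under this formula many jobs collapse onto a handful of priority levels (100, 200, ...), so the tie-breaking rule does most of the scheduling work.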
5) Eliminate "Equality", and see what AM does.
The reason for doing this is that AM discovered Cardinality via the
technique of generalizing the relation "Equality"-of-2-sets. What will
happen if we eliminate this path? Will AM rederive Equality? Will it get
to Cardinality via another route? Will it do some set-theoretic things?
6) General classes of expts: modify/add/eliminate certain concepts;
modify certain heuristics;
modify the strategy for choosing the next job / the value assigned to jobs.